DATA608: Assignment 2

Eric Lehmphul

In [12]:
import datashader as ds
import datashader.transfer_functions as tf
import datashader.glyphs
from datashader import reductions
from datashader.core import bypixel
from datashader.utils import lnglat_to_meters as webm, export_image
from datashader.colors import colormap_select, Greys9, viridis, inferno
import copy


from pyproj import Proj, transform
import numpy as np
import pandas as pd
import urllib
import json
import datetime
import colorlover as cl

import plotly.offline as py
import plotly.graph_objs as go
from plotly import tools

# from shapely.geometry import Point, Polygon, shape
# In order to get shapely, you'll need to run [pip install shapely] from your terminal

from functools import partial

from IPython.display import GeoJSON

py.init_notebook_mode()

For module 2 we'll be looking at techniques for dealing with big data, in particular binning strategies and the datashader library (which arguably means we'll never need to bin large data for visualization ever again).

To demonstrate these concepts we'll be looking at the PLUTO dataset put out by New York City's department of city planning. PLUTO contains data about every tax lot in New York City.

PLUTO data can be downloaded from here. Unzip them to the same directory as this notebook, and you should be able to read them in using this (or very similar) code. Also take note of the data dictionary; it'll come in handy for this assignment.

In [5]:
# Code to read in v17, column names have been updated (without upper case letters) for v18

# bk = pd.read_csv('PLUTO17v1.1/BK2017V11.csv')
# bx = pd.read_csv('PLUTO17v1.1/BX2017V11.csv')
# mn = pd.read_csv('PLUTO17v1.1/MN2017V11.csv')
# qn = pd.read_csv('PLUTO17v1.1/QN2017V11.csv')
# si = pd.read_csv('PLUTO17v1.1/SI2017V11.csv')

# ny = pd.concat([bk, bx, mn, qn, si], ignore_index=True)

ny = pd.read_csv('pluto_22v2.csv')

# list(ny.columns)

# Getting rid of some outliers
ny = ny[(ny['yearbuilt'] > 1850) & (ny['yearbuilt'] < 2020) & (ny['numfloors'] != 0)]

I'll also do some prep for the geographic component of this data, which we'll be relying on for datashader.

You're not required to know how I'm retrieving the latitude and longitude here, but for those interested: this dataset uses a flat x-y projection (which assumes, for a small enough area, that the world is flat, making the calculations easier), and this needs to be projected back to traditional latitude and longitude.

In [6]:
# wgs84 = Proj("+proj=longlat +ellps=GRS80 +datum=NAD83 +no_defs")
# nyli = Proj("+proj=lcc +lat_1=40.66666666666666 +lat_2=41.03333333333333 +lat_0=40.16666666666666 +lon_0=-74 +x_0=300000 +y_0=0 +ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs")
# ny['xcoord'] = 0.3048*ny['xcoord']
# ny['ycoord'] = 0.3048*ny['ycoord']
# ny['lon'], ny['lat'] = transform(nyli, wgs84, ny['xcoord'].values, ny['ycoord'].values)

# ny = ny[(ny['lon'] < -60) & (ny['lon'] > -100) & (ny['lat'] < 60) & (ny['lat'] > 20)]

#Defining some helper functions for DataShader
background = "black"
export = partial(export_image, background = background, export_path="export")
cm = partial(colormap_select, reverse=(background!="black"))
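
As an aside, pyproj's transform() function (used in the commented-out block above) is deprecated in pyproj 2+. Below is a minimal sketch of the modern Transformer equivalent, kept commented out like the block above since this run works directly in the state-plane coordinates; it assumes the raw xcoord/ycoord values are in the NY Long Island state plane (EPSG:2263, US survey feet).

# Sketch only: Transformer-based replacement for the deprecated transform() call above.
# from pyproj import Transformer
# to_wgs84 = Transformer.from_crs("EPSG:2263", "EPSG:4326", always_xy=True)
# ny['lon'], ny['lat'] = to_wgs84.transform(ny['xcoord'].values, ny['ycoord'].values)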

Part 1: Binning and Aggregation

Binning is a common strategy for visualizing large datasets. Binning is inherent to a few types of visualizations, such as histograms and 2D histograms (also check out their close relatives: 2D density plots, and the more general form: heatmaps).

While these visualization types explicitly include binning, any type of visualization used with aggregated data can be looked at in the same way. For example, let's say we wanted to look at building construction over time. This would be best viewed as a line graph, but we can still think of our results as being binned by year:

In [7]:
trace = go.Scatter(
    # I'm choosing BBL here because I know it's a unique key.
    x = ny.groupby('yearbuilt').count()['bbl'].index,
    y = ny.groupby('yearbuilt').count()['bbl']
)

layout = go.Layout(
    xaxis = dict(title = 'Year Built'),
    yaxis = dict(title = 'Number of Lots Built')
)

fig = go.FigureWidget(data = [trace], layout = layout)

fig

Something looks off... You're going to have to deal with this imperfect data to answer this first question.

But first: some notes on pandas. Pandas dataframes are a different beast than R dataframes, here are some tips to help you get up to speed:



Indexing and Selecting: .loc and .iloc are the analogs for base R subsetting, or filter() in dplyr

Group By: This is the pandas analog to group_by(), and the appended function is the analog to summarize(). Try out a few examples of this, and display the results in Jupyter. Take note of what's happening to the indexes: you'll notice that they become hierarchical. Once you perform an aggregation, try running the resulting hierarchical dataframe through a reset_index().

Reset_index: I personally find the hierarchical indexes more of a burden than a help, and this sort of hierarchical indexing leads to a fundamentally different experience compared to R dataframes. reset_index() is a way of restoring a dataframe to a flatter index style. Grouping is where you'll notice it the most, but it's also useful when you filter data, and in a few other split-apply-combine workflows. With pandas, indexes are more meaningful, so use this if you start getting unexpected results.
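
For instance, a small sketch using the PLUTO frame loaded above:

lots_per_year = ny.groupby('yearbuilt')['bbl'].count()          # result is indexed by yearbuilt
lots_per_year_flat = lots_per_year.reset_index(name='numlots')  # back to plain yearbuilt / numlots columns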

Indexes are more important in pandas than in R. If you delve deeper into using Python for data science, you'll begin to see the benefits in many places (despite the personal gripes I highlighted above). One place these indexes come in handy is with time series data. The pandas docs have a huge section on datetime indexing. In particular, check out resample, which provides time-series-specific aggregation.
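
A toy example (hypothetical data, not from PLUTO) of what a DatetimeIndex plus resample() gives you:

# Hypothetical daily series; resample('M') rolls it up into monthly sums.
ts = pd.Series(np.arange(90), index=pd.date_range('2022-01-01', periods=90, freq='D'))
monthly = ts.resample('M').sum()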

Merging, joining, and concatenation: There's some overlap between these different types of merges, so use this as your guide. Concat is a single function that replaces cbind and rbind in R, and the results are driven by the indexes. Read through these examples to get a feel on how these are performed, but you will have to manage your indexes when you're using these functions. Merges are fairly similar to merges in R, similarly mapping to SQL joins.
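
A small sketch of the difference, using hypothetical toy frames:

# concat stacks frames along an axis (think rbind/cbind); merge is a SQL-style keyed join.
left = pd.DataFrame({'bbl': [1, 2], 'val': [10, 20]})
right = pd.DataFrame({'bbl': [2, 3], 'other': ['a', 'b']})
stacked = pd.concat([left, left], ignore_index=True)  # rbind-like
joined = left.merge(right, on='bbl', how='inner')     # keeps only bbl == 2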

Apply: This is explained in the "group by" section linked above. These are your analogs to the plyr library in R. Take note of the lambda syntax used here, these are anonymous functions in python. Rather than predefining a custom function, you can just define it inline using lambda.
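
For example, a quick sketch deriving a decade label with an inline lambda (equivalent to plain vectorized arithmetic, just to show the syntax):

# Anonymous function applied element-wise over the yearbuilt column.
decade = ny['yearbuilt'].apply(lambda y: int(y) // 10 * 10)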

Browse through the other sections for some other specifics, in particular reshaping and categorical data (pandas' answer to factors.) Pandas can take a while to get used to, but it is a pretty strong framework that makes more advanced functions easier once you get used to it. Rolling functions for example follow logically from the apply workflow (and led to the best google results ever when I first tried to find this out and googled "pandas rolling")

Google Wes McKinney's book "Python for Data Analysis," which is a cookbook-style intro to pandas. It's an O'Reilly book that should be pretty available out there.


Question

After a few building collapses, the City of New York is going to begin investigating older buildings for safety. The city is particularly worried about buildings that were unusually tall when they were built, since best-practices for safety hadn’t yet been determined. Create a graph that shows how many buildings of a certain number of floors were built in each year (note: you may want to use a log scale for the number of buildings). Find a strategy to bin buildings (It should be clear 20-29-story buildings, 30-39-story buildings, and 40-49-story buildings were first built in large numbers, but does it make sense to continue in this way as you get taller?)

In [173]:
# Start your answer here, inserting more cells as you go along

# Create a copy of the original data frame that will be modified to create the necessary visualization 
q1 = ny.copy() 

# Create bins to store the rows that meet a certain floor limit. 
# As there were no predefined criteria for how the bins should be declared, I elected to use the following bins:
# 1-3 floors, 4-6 floors, 7-10 floors, 11-20 floors, 21-30 floors, 31-40 floors, 41-50 floors, and 50+ floors

bins = [0, 3, 6, 10, 20, 30, 40, 50, 110]
bin_names = ["1_to_3_floors", "4_to_6_floors", "7_to_10_floors", "11_to_20_floors",
             "21_to_30_floors", "31_to_40_floors", "41_to_50_floors", "taller_than_50_floors"]

# Sort data into bins

q1["Bin"] = pd.cut(q1["numfloors"], bins, labels = bin_names)
In [174]:
# Subset data needed to produce graph
q1_data = q1.loc[:, ["yearbuilt", "numfloors", "Bin"]]
In [175]:
# Get the number of buildings built, grouped by year and floor bin
q1_data1 = q1_data.groupby( ['yearbuilt', 'Bin'] ).size().reset_index()

# update newly created variable name to numbuildings 
q1_data1.rename(columns = {0:'numbuildings'}, inplace = True)

print(q1_data1.head(10))
   yearbuilt                    Bin  numbuildings
0     1851.0          1_to_3_floors            50
1     1851.0          4_to_6_floors            40
2     1851.0         7_to_10_floors             2
3     1851.0        11_to_20_floors             0
4     1851.0        21_to_30_floors             0
5     1851.0        31_to_40_floors             0
6     1851.0        41_to_50_floors             0
7     1851.0  taller_than_50_floors             0
8     1852.0          1_to_3_floors           126
9     1852.0          4_to_6_floors            85
In [176]:
# Create pivot table to easily create visualization in plotly
q1_data2 = q1_data1.pivot_table(index=["yearbuilt"], 
                    columns='Bin', 
                    values='numbuildings').fillna(0).reset_index()

# Grouping the pivot table by every decade (10 consecutive rows per group)
num_years = 10
q1_graph_data = q1_data2.groupby(q1_data2.index // num_years).sum()

# Updating the yearbuilt column, as it also underwent the sum function
for i in q1_graph_data.index:
    q1_graph_data.loc[i, 'yearbuilt'] = 1860 + 10 * i
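
As an aside, a more direct sketch (hypothetical names q1_alt / q1_graph_alt) would bin the years themselves instead of the positional row index, which also avoids having to overwrite yearbuilt afterwards:

# Sketch: derive the decade straight from yearbuilt rather than from the row index.
q1_alt = q1_data1.copy()
q1_alt['decade'] = (q1_alt['yearbuilt'] // 10 * 10).astype(int)
q1_graph_alt = (q1_alt.groupby(['decade', 'Bin'])['numbuildings']
                      .sum()
                      .unstack('Bin', fill_value=0)
                      .reset_index())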
In [177]:
# A look at the graph data
q1_graph_data
Out[177]:
Bin yearbuilt 1_to_3_floors 4_to_6_floors 7_to_10_floors 11_to_20_floors 21_to_30_floors 31_to_40_floors 41_to_50_floors taller_than_50_floors
0 1860.0 1043 742 12 3 0 0 0 0
1 1870.0 982 599 13 2 0 0 0 0
2 1880.0 1938 1089 22 2 0 0 0 0
3 1890.0 3320 2231 74 12 1 1 1 0
4 1900.0 24066 6793 386 84 8 0 0 0
5 1910.0 65652 10361 507 382 14 2 1 2
6 1920.0 98530 8671 508 489 36 3 2 1
7 1930.0 152111 10301 505 947 127 33 12 7
8 1940.0 93065 6064 134 155 18 11 7 5
9 1950.0 73973 833 90 100 13 2 1 1
10 1960.0 67794 1183 224 330 56 12 5 0
11 1970.0 40772 970 223 521 177 60 28 4
12 1980.0 19995 413 112 149 71 74 22 7
13 1990.0 24284 455 168 165 91 79 45 12
14 2000.0 28496 833 165 71 32 35 13 6
15 2010.0 34624 3964 933 360 88 56 28 26
16 2020.0 9264 3406 1060 444 139 66 41 42
In [233]:
# Create a stacked bar graph of the number of buildings built each decade based on the number of floors.
fig = go.Figure(data = [
    go.Bar(name = "1_to_3_floors", x = q1_graph_data["yearbuilt"], y = np.log10(q1_graph_data["1_to_3_floors"])),
    go.Bar(name = "4_to_6_floors", x = q1_graph_data["yearbuilt"], y = np.log10(q1_graph_data["4_to_6_floors"])),
    go.Bar(name = "7_to_10_floors", x = q1_graph_data["yearbuilt"], y = np.log10(q1_graph_data["7_to_10_floors"])),
    go.Bar(name = "11_to_20_floors", x = q1_graph_data["yearbuilt"], y = np.log10(q1_graph_data["11_to_20_floors"])),
    go.Bar(name = "21_to_30_floors", x = q1_graph_data["yearbuilt"], y = np.log10(q1_graph_data["21_to_30_floors"])),
    go.Bar(name = "31_to_40_floors", x = q1_graph_data["yearbuilt"], y = np.log10(q1_graph_data["31_to_40_floors"])),
    go.Bar(name = "41_to_50_floors", x = q1_graph_data["yearbuilt"], y = np.log10(q1_graph_data["41_to_50_floors"])),
    go.Bar(name = "taller_than_50_floors", x = q1_graph_data["yearbuilt"], y = np.log10(q1_graph_data["taller_than_50_floors"])),
])

fig.update_layout(barmode='stack',
                 title = "NYC: Number of Buildings by Floor per Decade",
                 xaxis_title = "Decade",
                 yaxis_title = "Number of Buildings (log10)",
                 legend_title = "Floor Categories")
fig.show()
C:\Users\ericl\anaconda3\lib\site-packages\pandas\core\arraylike.py:364: RuntimeWarning:

divide by zero encountered in log10
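
The warning comes from taking log10 of bins that contain zero buildings, which produces -inf bar heights. A hedged alternative (a sketch, not the figure shown above) is to pass the raw counts and log-scale the axis instead, which sidesteps log10(0) entirely; note that Plotly then stacks the raw counts before applying the log axis, so the segments read differently from the log-of-count bars above.

# Sketch: raw counts on a log-scaled y axis instead of log-transformed bar heights.
fig_alt = go.Figure(data=[
    go.Bar(name=col, x=q1_graph_data['yearbuilt'], y=q1_graph_data[col])
    for col in bin_names
])
fig_alt.update_layout(barmode='stack', yaxis_type='log',
                      title='NYC: Number of Buildings by Floor per Decade (log axis)')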

In [234]:
# Show .png of plotly plot 
fig.show(renderer="png")

Part 2: Datashader

Datashader is a library from Anaconda that does away with the need to bin data yourself. It takes in all of your datapoints and, based on the canvas and range, performs pixel-by-pixel calculations to come up with the best representation of the data. In short, this completely eliminates the need to bin your data by hand.

As an example, let's continue with our question above and look at a 2D histogram of YearBuilt vs NumFloors:

In [8]:
fig = go.FigureWidget(
    data = [
        go.Histogram2d(x=ny['yearbuilt'], y=ny['numfloors'], autobiny=False, ybins={'size': 1}, colorscale='Greens')
    ]
)

fig

This shows us the distribution, but it's subject to some biases discussed in the Anaconda notebook Plotting Perils.

Here is what the same plot would look like in datashader:

In [9]:
#Defining some helper functions for DataShader
background = "black"
export = partial(export_image, background = background, export_path="export")
cm = partial(colormap_select, reverse=(background!="black"))

cvs = ds.Canvas(800, 500, x_range = (ny['yearbuilt'].min(), ny['yearbuilt'].max()), 
                                y_range = (ny['numfloors'].min(), ny['numfloors'].max()))
agg = cvs.points(ny, 'yearbuilt', 'numfloors')
view = tf.shade(agg, cmap = cm(Greys9), how='log')
export(tf.spread(view, px=2), 'yearvsnumfloors')
Out[9]:

That's technically just a scatterplot, but the points are smartly placed and colored to mimic what one gets in a heatmap. Depending on the pixel size, it will either display individual points or shade the denser regions more heavily.

Datashader really shines when looking at geographic information. Here are the coordinates of our dataset plotted out, giving us a map of the city colored by density of structures:

In [10]:
NewYorkCity   = (( 913164.0,  1067279.0), (120966.0, 272275.0))
cvs = ds.Canvas(700, 700, *NewYorkCity)
agg = cvs.points(ny, 'xcoord', 'ycoord')
view = tf.shade(agg, cmap = cm(inferno), how='log')
export(tf.spread(view, px=2), 'firery')
Out[10]:

Interestingly, since we're looking at structures, the large buildings of Manhattan show up as less dense on the map. The densest areas measured by number of lots would be single or multi family townhomes.

Unfortunately, Datashader doesn't have the best documentation. Browse through the examples from their github repo. I would focus on the visualization pipeline and the US Census Example for the question below. Feel free to use my samples as templates as well when you work on this problem.

Question

You work for a real estate developer and are researching underbuilt areas of the city. After looking in the PLUTO data dictionary, you've discovered that all tax assessments consist of two parts: the assessment of the land and the assessment of the structure. You reason that there should be a correlation between these two values: more valuable land will have more valuable structures on it (more valuable in this case refers not just to a mansion vs a bungalow, but an apartment tower vs a single family home). Deviations from the norm could represent underbuilt or overbuilt areas of the city. You also recently read a really cool blog post about bivariate choropleth maps, and think the technique could be used for this problem.

Datashader is really cool, but it's not that great at labeling your visualization. Don't worry about providing a legend, but provide a quick explanation as to which areas of the city are overbuilt, which areas are underbuilt, and which areas are built in a way that's properly correlated with their land value.

In [227]:
# Create copy of original data to be used to answer question 2
q2 = ny.copy()

# need to calculate building assessment as it is not given
q2["assessbuilding"] = q2["assesstot"] - q2["assessland"]

# get necessary variables and store in a new data frame object
# (.copy() avoids SettingWithCopyWarning when inserting columns below)
q2_data = q2[["assessland", "assessbuilding", "xcoord", "ycoord"]].copy()
In [228]:
# The blog post suggested using 3 classes per variable to allow for 9 total classes
num_of_classes = 3

# Create the bins for land assessment value
land_value_classification = pd.qcut(q2_data['assessland'], 
                                  num_of_classes, 
                                  labels=['land_val_1', 'land_val_2', 'land_val_3'])

q2_data.insert(4, "LandValueRank", land_value_classification)


# Create the bins for building assessment value
building_value_classification = pd.qcut(q2_data['assessbuilding'], 
                                  num_of_classes, 
                                  labels=['building_val_1', 'building_val_2', 'building_val_3'])

q2_data.insert(5, "BuildingValueRank", building_value_classification)


# Create bivariate classes by concatenating the LandValueRank and BuildingValueRank
bivariate_classification = pd.Categorical(q2_data['LandValueRank'].astype(str) + "/" +
                                    q2_data['BuildingValueRank'].astype(str))

q2_data.insert(6, "BivariateClass", bivariate_classification)
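
Before shading, a quick sanity check on how the nine combined classes are populated can be useful (just a sketch; qcut makes each single-variable tier roughly equal in size, but the 3x3 combinations will not be):

# Optional check: number of lots per bivariate class.
print(q2_data['BivariateClass'].value_counts())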
In [232]:
# Quick look at data before graphing
q2_data.head()
Out[232]:
assessland assessbuilding xcoord ycoord LandValueRank BuildingValueRank BivariateClass
0 10560.0 46020.0 938611.0 161974.0 land_val_1 building_val_2 land_val_1/building_val_2
2 8100.0 308700.0 1000194.0 180391.0 land_val_1 building_val_3 land_val_1/building_val_3
3 17280.0 40740.0 1006390.0 189391.0 land_val_2 building_val_2 land_val_2/building_val_2
4 20040.0 46380.0 1000344.0 180415.0 land_val_3 building_val_2 land_val_3/building_val_2
5 20520.0 78480.0 1000192.0 180290.0 land_val_3 building_val_3 land_val_3/building_val_3
In [231]:
# I elected to use a palette from https://tolomaps.tumblr.com/post/131671267233/creating-a-bivariate-choropleth-color-scheme,
# where dark purple indicates high land value and high building value, whereas white indicates low land value and low building value.
# Red indicates overbuilt (high building value on low-value land), whereas blue indicates underbuilt (low building value on high-value land).

map_colors = {'land_val_1/building_val_1': '#dddddd', 'land_val_2/building_val_1': '#7bb3d1', 'land_val_3/building_val_1': '#016eae', 
          'land_val_1/building_val_2': '#dd7c8a', 'land_val_2/building_val_2': '#8d6c8f', 'land_val_3/building_val_2': '#4a4779', 
          'land_val_1/building_val_3': '#cc0024', 'land_val_2/building_val_3': '#8a274a', 'land_val_3/building_val_3': '#4b264d'}

# Creates the bivariate choropleth map
NewYorkCity   = (( 913164.0,  1067279.0), (120966.0, 272275.0))
cvs = ds.Canvas(700, 700, *NewYorkCity)
agg = cvs.points(q2_data, 'xcoord', 'ycoord', ds.count_cat('BivariateClass'))
view = tf.shade(agg, color_key = map_colors)
export(tf.spread(view, px=2), 'q2_map')
Out[231]:

As mentioned above, there is reason to believe that assessland and assessbuilding are correlated. Any area that does not match this correlation assumption could be deemed underbuilt or overbuilt, depending on the color shown on the map.

As expected, Manhattan is built in proportion to its land value (both the land and the buildings are expensive). Brooklyn and Queens are also making proper use of the land closest to Manhattan. Interestingly, Staten Island and the Bronx have vast numbers of lots with both low land and low building value assessments.

The northern parts of Queens, probably with more difficult access to Manhattan, appear to be overbuilt.

Staten Island and Queens have a noticeable amount of underbuilt land that can be capitalized on by a real estate developer.

In [ ]: